Deep learning-based models, such as recurrent neural networks (RNNs), have been applied with great success to a variety of sequence learning tasks. Following this success, these models are increasingly replacing classical approaches for motion prediction in object tracking applications. On the one hand, these models can capture complex object dynamics with less modeling effort; on the other hand, they depend on large amounts of training data for parameter tuning. Towards this end, we introduce a method for generating synthetic trajectory data of unmanned aerial vehicles (UAVs) in image space. Since UAVs, or rather quadrotors, are dynamical systems, they cannot follow arbitrary trajectories. Under the prerequisite that UAV trajectories fulfill a smoothness criterion corresponding to minimal changes in higher-order motion, methods for planning aggressive quadrotor flight can be exploited to generate optimal trajectories through a sequence of 3D waypoints. By projecting these maneuver trajectories, which are suitable for controlling quadrotors, into image space, a versatile trajectory dataset is obtained. To demonstrate the applicability of the synthetic trajectory data, we show that an RNN-based prediction model trained on the generated data can outperform classical reference models on a real-world UAV tracking dataset. The evaluation is done on the publicly available Anti-UAV dataset.
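The pipeline of smooth waypoint trajectories projected into image space can be sketched as follows. This is a simplified illustration, not the paper's method: a single minimum-jerk (5th-order) polynomial segment stands in for the multi-segment snap-optimal planner, and the camera intrinsics are assumed values.

```python
import numpy as np

# Hypothetical sketch: one minimum-jerk segment between two 3D waypoints,
# then a pinhole projection into image space. The paper optimizes
# higher-order motion over many waypoints; this only shows the idea.

def min_jerk_segment(p0, p1, T, n=50):
    """Smooth position profile from p0 to p1 with zero boundary vel./acc."""
    t = np.linspace(0.0, T, n) / T
    s = 10 * t**3 - 15 * t**4 + 6 * t**5   # minimum-jerk time scaling
    return p0[None, :] + s[:, None] * (p1 - p0)[None, :]

def project_to_image(points_3d, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of camera-frame 3D points (z > 0) to pixels."""
    x, y, z = points_3d.T
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

traj = min_jerk_segment(np.array([0., 0., 5.]), np.array([2., 1., 6.]), T=2.0)
pixels = project_to_image(traj)   # 2D track usable as synthetic training data
```

Chaining such segments over a waypoint sequence and varying the camera pose yields the kind of versatile image-space trajectory dataset the abstract describes.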
In applications such as object tracking, time-series data inevitably carries missing observations. Following the success of deep learning-based models on a variety of sequence learning tasks, these models are increasingly replacing classical approaches in object tracking applications for inferring an object's motion state. While traditional tracking approaches can deal with missing observations, most of their deep counterparts are by default not suited for this. Towards this end, this paper introduces a transformer-based approach for handling missing observations in variable-input-length trajectory data. The model is formed indirectly by successively increasing the complexity of the demanded inference tasks. Starting from reproducing noise-free trajectories, the model then learns to infer trajectories from noisy inputs. By providing missing tokens, binary-encoded missing events, the model learns to attend to missing data and infers a complete trajectory conditioned on the remaining inputs. In the case of a sequence of consecutive missing events, the model then acts as a pure prediction model. The capabilities of the approach are demonstrated on synthetic data and real-world data reflecting prototypical object tracking scenarios.
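The binary-encoded missing events can be sketched as an input-encoding step. This is a minimal, assumed encoding (the transformer itself is omitted): missing steps are zero-filled and flagged in an extra channel so the model can condition on which steps were actually observed.

```python
import numpy as np

# Hypothetical sketch of the input encoding: missing observations are
# zero-filled and marked with a binary "missing token" channel; the
# downstream transformer (not shown) infers the full trajectory from it.

def encode_with_missing_flags(track):
    """track: (T, 2) array of x/y positions with np.nan at missing steps."""
    missing = np.isnan(track).any(axis=1).astype(float)   # (T,) binary flags
    filled = np.nan_to_num(track, nan=0.0)                # zero-fill gaps
    return np.concatenate([filled, missing[:, None]], axis=1)  # (T, 3)

track = np.array([[0., 0.], [1., 0.5], [np.nan, np.nan], [3., 1.5]])
encoded = encode_with_missing_flags(track)
```

If the trailing steps of `track` are all NaN, the flags mark a run of consecutive missing events, which is exactly the case where the abstract says the model degenerates into a pure prediction model.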
In tasks such as tracking, time-series data inevitably carries missing observations. While traditional tracking approaches can deal with missing observations, recurrent neural networks (RNNs) are designed to receive input data at every step. Furthermore, current solutions for RNNs, such as omitting the missing data or data imputation, are not sufficient to account for the resulting increased uncertainty. Towards this end, this paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation. Inspired by the Kalman filter, the approach can deal with missing observations and outliers. For providing a full temporal filtering cycle, a basic RNN is extended to take observations and the associated belief about their accuracy into account for updating the current state. An RNN prediction model, which generates a parametrized distribution to capture the predicted states, is combined with an RNN update model, which relies on the prediction model output and the current observation. By providing the model with masking information, binary-encoded missing events, the model can overcome the limitations of standard techniques for dealing with missing input values. The model's capabilities are demonstrated on synthetic data reflecting prototypical pedestrian tracking scenarios.
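The "full temporal filtering cycle" the abstract refers to is the classical predict/update loop. As a point of reference (not the paper's learned model), a constant-velocity Kalman filter shows the cycle the RNN pair emulates, including simply skipping the update when an observation is missing:

```python
import numpy as np

# Classical constant-velocity Kalman filter as a sketch of the
# predict/update cycle the paper replaces with an RNN prediction model
# and an RNN update model. All matrices here are illustrative choices.

def kalman_step(x, P, z, F, H, Q, R):
    """One filter cycle; z=None skips the update (missing observation)."""
    x, P = F @ x, F @ P @ F.T + Q            # predict
    if z is not None:
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (z - H @ x)              # update with observation
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1, dt], [0, 1]], float)       # position/velocity dynamics
H = np.array([[1.0, 0.0]])                   # observe position only
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for z in ([1.0], [2.0], None, [4.0]):        # None marks a missing step
    x, P = kalman_step(x, P, None if z is None else np.array(z), F, H, Q, R)
```

In the paper's setting, the parametrized predictive distribution and the observation-weighted update are learned rather than fixed linear-Gaussian, but the cycle structure is the same.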
Creativity is an indispensable part of human cognition and also an inherent part of how we make sense of the world. Metaphorical abstraction is fundamental in communicating creative ideas through nuanced relationships between abstract concepts such as feelings. While computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images, metaphorical comprehension of images remains relatively unexplored. Towards this goal, we introduce MetaCLUE, a set of vision tasks on visual metaphor. We also collect high-quality and rich metaphor annotations (abstract objects, concepts, relationships along with their corresponding object boxes) as there do not exist any datasets that facilitate the evaluation of these tasks. We perform a comprehensive analysis of state-of-the-art models in vision and language based on our annotations, highlighting strengths and weaknesses of current approaches in visual metaphor Classification, Localization, Understanding (retrieval, question answering, captioning) and gEneration (text-to-image synthesis) tasks. We hope this work provides a concrete step towards developing AI systems with human-like creative capabilities.
Deploying machine learning (ML) algorithms within databases is a challenge due to the varied computational footprints of modern ML algorithms and the myriad of database technologies, each with its own restrictive syntax. We introduce an Apache Spark-based micro-service orchestration framework that extends database operations to include web service primitives. Our system can orchestrate web services across hundreds of machines and takes full advantage of cluster, thread, and asynchronous parallelism. Using this framework, we provide large-scale clients for intelligent services such as speech, vision, search, anomaly detection, and text analysis. This allows users to integrate ready-to-use intelligence into any datastore that has an Apache Spark connector. To eliminate the majority of overhead from network communication, we also introduce a low-latency containerized version of our architecture. Finally, we demonstrate that the services we investigate are competitive on a variety of benchmarks, and present two applications of this framework to create intelligent search engines and real-time auto race analytics systems.
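The thread-level parallelism layered under Spark's cluster parallelism can be sketched with the standard library alone. This is an assumed illustration, not the framework's API: `call_service` stands in for a real HTTP request to an intelligent service, and `score_partition` plays the role of the per-partition fan-out a Spark executor would run.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of per-partition web-service fan-out: each row of a partition is
# scored by a (stubbed) remote service, with thread-level concurrency
# hiding network latency. A real deployment would issue HTTP calls here.

def call_service(row):
    """Stand-in for an HTTP request to a text-analytics endpoint."""
    return {"text": row, "sentiment": "positive"}   # stubbed response

def score_partition(rows, max_workers=8):
    """Fan out service calls for one data partition."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_service, rows))

results = score_partition(["good movie", "bad service", "fine"])
```

Spark would apply `score_partition` via `mapPartitions`, giving cluster parallelism across partitions on top of the thread parallelism within each.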
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack, with higher level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
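Muse's masked modeling objective in discrete token space can be sketched as a corruption step. The codebook size and mask ratio below are assumptions for illustration; the LLM conditioning and the transformer that predicts the masked ids are omitted.

```python
import numpy as np

# Sketch of the masked-token objective: a random fraction of the discrete
# image tokens is replaced by a MASK id, and a model (not shown) is trained
# to predict the original ids at the masked positions. 8192 is an assumed
# VQ codebook size; MASK_ID is one extra reserved id.

MASK_ID = 8192
rng = np.random.default_rng(0)

def mask_tokens(tokens, mask_ratio=0.5):
    """Corrupt a token grid; the model's targets are tokens[mask]."""
    mask = rng.random(tokens.shape) < mask_ratio
    corrupted = np.where(mask, MASK_ID, tokens)
    return corrupted, mask

tokens = rng.integers(0, 8192, size=(16, 16))   # a 16x16 image-token grid
corrupted, mask = mask_tokens(tokens)
```

At inference, parallel decoding repeats this in reverse: all masked positions are predicted at once, the most confident predictions are kept, and the rest stay masked for the next iteration, which is why Muse needs far fewer steps than autoregressive decoding.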
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data and more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making to improve patient outcomes, by reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
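The Gaussian-windowing degradation step can be sketched directly. The window width below is an assumed value; the point is that narrowing the effective spectral bandwidth before the inverse transform blurs the reconstructed A-scan, mimicking a lower axial resolution.

```python
import numpy as np

# Sketch of the simulation: a 1D spectral interferogram (A-scan) is
# multiplied by a centered Gaussian window, shrinking the effective
# bandwidth; the inverse FFT then yields an axially-degraded depth profile.

def gaussian_window_ascan(spectrum, sigma_frac=0.15):
    """Apply a centered Gaussian window to a 1D spectral A-scan."""
    n = len(spectrum)
    k = np.arange(n) - n / 2
    window = np.exp(-0.5 * (k / (sigma_frac * n)) ** 2)
    return spectrum * window

spectrum = np.random.default_rng(1).normal(size=1024)  # synthetic A-scan
degraded = np.abs(np.fft.ifft(gaussian_window_ascan(spectrum)))
```

Pairs of full-bandwidth and windowed A-scans produced this way give the pixel-to-pixel training data for the SRGAN-style reconstruction network.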
Compliance in actuation has been exploited to generate highly dynamic maneuvers such as throwing that take advantage of the potential energy stored in joint springs. However, the energy storage and release could not be well-timed yet. On the contrary, for multi-link systems, the natural system dynamics might even work against the actual goal. With the introduction of variable stiffness actuators, this problem has been partially addressed. With a suitable optimal control strategy, the approximate decoupling of the motor from the link can be achieved to maximize the energy transfer into the distal link prior to launch. However, such continuous stiffness variation is complex and typically leads to oscillatory swing-up motions instead of clear launch sequences. To circumvent this issue, we investigate decoupling for speed maximization with a dedicated novel actuator concept denoted Bi-Stiffness Actuation. With this, it is possible to fully decouple the link from the joint mechanism by a switch-and-hold clutch and simultaneously keep the elastic energy stored. We show that with this novel paradigm, it is not only possible to reach the same optimal performance as with power-equivalent variable stiffness actuation, but even directly control the energy transfer timing. This is a major step forward compared to previous optimal control approaches, which rely on optimizing the full time-series control input.
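Why release timing matters can be seen in an energy-budget toy model. This is only an illustrative sketch under idealized assumptions (all stored elastic energy transfers to the link, no motor dynamics), not the bi-stiffness actuator's real dynamics:

```python
import numpy as np

# Toy energy-transfer model: while the clutch holds the link, the joint
# spring stores E = 1/2 * k * q^2; on release, assume all of it converts
# into link kinetic energy, 1/2 * I * v^2, giving the launch speed.
# Releasing at peak deflection therefore beats releasing early.

def launch_speed(k, deflection, link_inertia):
    """Ideal link speed after releasing a spring deflected by `deflection`."""
    energy = 0.5 * k * deflection**2          # stored elastic energy
    return np.sqrt(2.0 * energy / link_inertia)

v_peak = launch_speed(k=100.0, deflection=0.5, link_inertia=0.2)   # well-timed
v_early = launch_speed(k=100.0, deflection=0.2, link_inertia=0.2)  # premature
```

The switch-and-hold clutch makes the release instant a direct control input, which is what lets Bi-Stiffness Actuation control the energy-transfer timing instead of shaping it indirectly through a full time-series stiffness profile.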
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support an appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System to classify DR Grading, localize lesion areas, and provide visual explanations; (ii) DRG-Expert-Interaction to receive feedback from expert users and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach can be robust given a certain level of noise in the feedback of users. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
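The attention-based selection of lesion information can be sketched generically. Shapes, names, and the weighting scheme below are illustrative assumptions, not DRG-Net's actual architecture: per-lesion features are softly weighted so the most informative lesion types dominate the pooled grading feature, and the weights themselves double as an explanation.

```python
import numpy as np

# Hypothetical sketch of attention over lesion features: logits score each
# lesion type, softmax turns them into weights, and the pooled feature is
# the weighted sum. The weights are inspectable, giving explainability.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_lesions(lesion_feats, scores):
    """lesion_feats: (L, D) per-lesion features; scores: (L,) logits."""
    weights = softmax(scores)                 # attention over lesion types
    return weights @ lesion_feats, weights    # (D,) pooled grading feature

feats = np.eye(3)                             # 3 lesion types, D = 3 (toy)
pooled, w = attend_lesions(feats, np.array([2.0, 0.0, 0.0]))
```

Expert feedback could then act on these weights, e.g. down-weighting a lesion type the expert marks as a false detection before the system is updated.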